[nit] glmasr should be in AutoModelForMultimodalLM #45670
Conversation
    ("cohere_asr", "CohereAsrForConditionalGeneration"),
    ("dia", "DiaForConditionalGeneration"),
+   ("glmasr", "GlmAsrForConditionalGeneration"),
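For context, the diff above adds an entry to one of the library's Auto* class registries. A minimal sketch of how such a name-to-class mapping works is below; the registry variable and lookup helper are simplified illustrations, not the library's actual implementation (Transformers uses lazy mapping objects around similar `OrderedDict`s of names):

```python
from collections import OrderedDict

# Hypothetical, simplified registry mirroring the entries in the diff:
# an ordered mapping from model-type strings to architecture class names.
MODEL_FOR_MULTIMODAL_LM_MAPPING_NAMES = OrderedDict(
    [
        ("cohere_asr", "CohereAsrForConditionalGeneration"),
        ("dia", "DiaForConditionalGeneration"),
        ("glmasr", "GlmAsrForConditionalGeneration"),  # the entry this PR adds
    ]
)


def resolve_class_name(model_type: str) -> str:
    """Return the architecture class name registered for a model type."""
    try:
        return MODEL_FOR_MULTIMODAL_LM_MAPPING_NAMES[model_type]
    except KeyError:
        raise ValueError(f"Model type {model_type!r} is not registered") from None
```

With this sketch, `AutoModelForMultimodalLM.from_pretrained(...)` would conceptually read the checkpoint's `model_type` from its config and resolve it through a mapping like this one, which is why a missing entry makes the auto class fail for that model.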
I think it is also missing audio/music Flamingo for generation; those are now mapped in AutoModel 🙈
Btw, I remembered one thing. Not sure if the models are actually seq2seq, but the usage snippet on the hub currently uses … I think we need to either delete them from …
Thanks for the step back here, you're totally right: these (like the others) are not seq2seq. I never looked deeply enough into this; the overall mapping for audio models makes little sense, I guess. Taking a deeper look now and will come up with a follow-up.
What does this PR do?
As per title.